This paper provides a comprehensive survey of bias mitigation methods for achieving fairness in machine learning (ML) models. We collect a total of 234 publications concerning bias mitigation for ML classifiers. These methods can be distinguished based on their intervention procedure (i.e., pre-processing, in-processing, post-processing) and the technique they apply. We investigate how existing bias mitigation methods are evaluated in the literature. In particular, we consider datasets, metrics, and benchmarking. Based on the gathered insights (e.g., What are the most popular fairness metrics? Which datasets are used to evaluate bias mitigation methods?), we hope to support practitioners in making informed choices when developing and evaluating new bias mitigation methods.
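As a concrete illustration of the concepts surveyed above, the sketch below computes one widely used fairness metric (statistical parity difference) and applies a classic pre-processing-style intervention (instance reweighing) on toy data. This is our own minimal sketch, not code from the survey; the function names and data are ours.

```python
# Minimal sketch (our own illustration): statistical parity difference,
# a common fairness metric, and instance reweighing, a classic
# pre-processing bias mitigation intervention.

def statistical_parity_difference(groups, predictions):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged)."""
    rates = {}
    for g in (0, 1):  # 0 = unprivileged, 1 = privileged
        preds = [p for grp, p in zip(groups, predictions) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates[0] - rates[1]

def reweigh(groups, labels):
    """Weight w(g, y) = P(g) * P(y) / P(g, y), so that group and label
    become statistically independent under the weighted distribution."""
    n = len(labels)
    weights = []
    for g, y in zip(groups, labels):
        p_g = sum(1 for x in groups if x == g) / n
        p_y = sum(1 for x in labels if x == y) / n
        p_gy = sum(1 for a, b in zip(groups, labels) if (a, b) == (g, y)) / n
        weights.append(p_g * p_y / p_gy)
    return weights

groups      = [0, 0, 0, 0, 1, 1, 1, 1]  # protected attribute value
predictions = [0, 0, 0, 1, 1, 1, 1, 0]  # classifier output
spd = statistical_parity_difference(groups, predictions)
print(f"statistical parity difference: {spd:+.2f}")  # -0.50 on this toy data
```

A value of 0 would indicate parity between the two groups; reweighing would then be applied to the training labels (not the predictions) before fitting a classifier.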
Software bias is an increasingly important operational concern for software engineers. We present a large-scale, comprehensive empirical study of 17 representative bias mitigation methods, evaluated with 12 machine learning (ML) performance metrics, 4 fairness metrics, and 24 types of fairness-performance trade-off assessment, applied to 8 widely adopted software decision/prediction benchmark tasks. The empirical coverage is comprehensive, covering the largest numbers of bias mitigation methods, evaluation metrics, and fairness-performance trade-off measures compared with previous work on this important operational software property. We find that (1) bias mitigation methods significantly decrease the values reported by all ML performance metrics (including those not considered in previous work) in a large proportion of scenarios (42%~75% according to different ML performance metrics); (2) bias mitigation methods achieve fairness improvement in only approximately 50% of scenarios across all scenarios and metrics (ranging between 29%~59% according to the metric used to assess bias/fairness); (3) bias mitigation methods have a poor trade-off, or even lead to decreases in both fairness and ML performance, in 37% of scenarios; (4) the effectiveness of bias mitigation methods depends on tasks, models, and the fairness and ML performance metrics used, and there is no "silver bullet" bias mitigation method demonstrated to be effective for all scenarios studied. The best bias mitigation method that we find outperforms the others in only 29% of scenarios. We make publicly available the scripts and data used in this study in order to allow for future replication and extension of our work.
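A simple way to picture the trade-off findings above is to classify each scenario by whether mitigation improved or worsened fairness and ML performance. The sketch below is our own illustration of such a per-scenario classification; the study's actual trade-off measures are richer, and the numbers here are invented.

```python
# Sketch (our illustration, with made-up numbers): classify a mitigation
# outcome per scenario by comparing ML performance and a bias score
# before and after mitigation.

def classify_tradeoff(perf_before, perf_after, bias_before, bias_after):
    """Return one of: 'win-win', 'perf-loss', 'fair-loss', 'lose-lose'.
    Higher performance is better; lower bias score is better."""
    perf_ok = perf_after >= perf_before
    fair_ok = bias_after <= bias_before
    if perf_ok and fair_ok:
        return "win-win"
    if fair_ok:
        return "perf-loss"
    if perf_ok:
        return "fair-loss"
    return "lose-lose"

# (accuracy_before, accuracy_after, bias_before, bias_after) per scenario
scenarios = [
    (0.82, 0.80, 0.30, 0.10),  # fairness improved, accuracy dropped
    (0.82, 0.83, 0.30, 0.12),  # both improved
    (0.82, 0.79, 0.30, 0.35),  # both got worse (the 37% failure mode)
]
for s in scenarios:
    print(classify_tradeoff(*s))
```

Aggregating such labels over tasks, models, and metric choices yields the kinds of percentages reported in the abstract.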
Understanding public discourse on the emergency use of unproven therapeutic agents is crucial for monitoring safe use and combating misinformation. We developed a natural language processing (NLP)-based pipeline to understand public perceptions of and stances on COVID-19-related drugs. This retrospective study included 609,189 US-based tweets posted between January 29, 2020 and November 30, 2021 about four drugs that gained wide public attention during the COVID-19 pandemic: 1) hydroxychloroquine and ivermectin, drug therapies with anecdotal evidence; and 2) molnupiravir and remdesivir, FDA-approved treatment options for eligible patients. Time-trend analysis was used to understand popularity and related events. Content and demographic analyses were conducted to explore potential rationales behind people's stances on each drug. Time-trend analysis revealed that hydroxychloroquine and ivermectin were discussed much more than molnupiravir and remdesivir, particularly during COVID-19 surges. Hydroxychloroquine and ivermectin were highly politicized, and were associated with conspiracy theories, hearsay, celebrity effects, etc. The distribution of stances between the two major US political parties was significantly different (p < 0.001); Republicans were much more likely than Democrats to support hydroxychloroquine (+55%) and ivermectin (+30%). People with healthcare backgrounds tended to oppose hydroxychloroquine (+7%) more than the general population; in contrast, the general population was more likely to support ivermectin (+14%). We make all our data, code, and models available at https://github.com/ningkko/covid-drug.
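The reported between-party stance difference (p < 0.001) comes from comparing stance distributions across groups. As a hedged sketch of that kind of comparison, the code below computes a Pearson chi-square statistic on a contingency table of stances by group; the counts are invented for illustration and are not the study's data.

```python
# Sketch (our own toy counts, not the study's data): comparing stance
# distributions between two groups with a Pearson chi-square statistic.

def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_sums = [sum(r) for r in table]
    col_sums = [sum(c) for c in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: party A, party B; columns: support / neutral / oppose (toy counts)
table = [[550, 250, 200],
         [220, 280, 500]]
print(f"chi-square statistic: {chi_square(table):.1f}")  # 271.7
```

A statistic this large on a 2x3 table corresponds to p far below 0.001; in practice one would use a library routine that also returns the p-value and degrees of freedom.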
This work details a highly efficient implementation of the 3D scale-invariant feature transform (SIFT) algorithm, for the purpose of machine learning from large sets of volumetric image data. The primary operations of the 3D SIFT code are implemented on a graphics processing unit (GPU), including convolution, sub-sampling, and 4D peak detection from scale-space pyramids. Performance improvements are quantified using 3D MRI human brain volumes of different people. Computationally efficient 3D keypoint descriptors are proposed based on the Binary Robust Independent Elementary Features (BRIEF) code, including a novel descriptor we call Ranked Robust Independent Elementary Features (RRIEF), and compared to the original 3D SIFT-Rank method \citep{Toews2013}. The GPU implementation achieves a speedup of approximately 7X over an optimized CPU implementation, where computation time is reduced from approximately 1.4 seconds to 0.2 seconds for 3D volumes of size (145, 174, 145) voxels with approximately 3000 keypoints. Notable speedups include the convolution operation (20X), 4D peak detection (3X), sub-sampling (3X), and Gaussian pyramid construction (2X). The efficient descriptors offer a 2X speedup and a 6X memory savings compared with the standard SIFT-Rank descriptor, at the cost of reduced keypoint correspondences, revealing a trade-off between computational efficiency and algorithmic performance. The speedups gained by our implementation will allow for more efficient analysis of larger data sets. Our optimized GPU implementation of the 3D SIFT-Rank extractor is available at https://github.com/carluerjb/3d_sift_cuda.
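To make the pipeline structure concrete, the sketch below builds one octave of a 3D Gaussian scale-space pyramid on the CPU: separable Gaussian smoothing along each axis followed by 2x sub-sampling. This is our own simplified illustration of the data structure whose construction the paper's GPU kernels accelerate; 4D peak detection would then search the resulting (x, y, z, scale) space. All function names and parameters here are ours.

```python
import numpy as np

# Sketch (our simplification): one octave of a 3D scale-space pyramid,
# built by separable Gaussian convolution and 2x sub-sampling.

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur3d(vol, sigma):
    """Separable Gaussian blur: convolve along each of the 3 axes in turn."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma) + 1)
    for axis in range(3):
        vol = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, vol)
    return vol

def scale_space_octave(vol, n_scales=3, sigma=1.6):
    """Progressively blurred copies of the volume, then a 2x sub-sampling
    that seeds the next octave."""
    levels = [vol]
    for _ in range(n_scales - 1):
        levels.append(blur3d(levels[-1], sigma))
    subsampled = levels[-1][::2, ::2, ::2]
    return levels, subsampled

vol = np.random.default_rng(0).random((16, 16, 16)).astype(np.float32)
levels, sub = scale_space_octave(vol)
print(len(levels), sub.shape)  # 3 (8, 8, 8)
```

On a GPU, each of these axis-wise 1D convolutions maps naturally onto one kernel launch, which is why convolution shows the largest (20X) speedup in the figures above.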
Simultaneously evolving the morphologies (bodies) and controllers (brains) of robots can cause a mismatch between the inherited body and brain in the offspring. To mitigate this problem, a learning period after birth was proposed relatively early on through the so-called Triangle of Life framework. However, an empirical assessment of this learning mechanism has been lacking to date. In this paper, we investigate the effects of such a learning mechanism from different perspectives. Using extensive simulations, we show that, compared with a purely evolutionary approach, learning can greatly increase task performance and reduce the number of generations required to reach a certain fitness level. Furthermore, although learning only directly affects the controllers, we demonstrate that the evolved morphologies will also be different. This provides a quantitative demonstration that changes in the brain can induce changes in the body. Finally, we examine the concept of morphological intelligence, quantified by the learning capacity of a given body. We observe that the learning delta, the performance difference between the inherited and the learned brain, grows throughout the evolutionary process. This shows that evolution is producing robots with increasing plasticity; that is, successive generations become better and better learners, which in turn makes them perform better and better at the given task. Altogether, our results demonstrate that the Triangle of Life is not only a concept of theoretical interest, but a system architecture with practical benefits.
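The learning delta described above can be sketched in miniature: an inherited controller is refined by a short lifetime-learning loop, and the delta is the fitness gain of the learned brain over the inherited one. The toy fitness function, hill-climbing learner, and parameters below are our own stand-ins, not the paper's simulator.

```python
import random

# Toy sketch (ours, not the paper's setup): lifetime learning applied to an
# inherited brain, and the resulting "learning delta" in fitness.

def fitness(body, brain):
    # Stand-in task performance: how well brain parameters match the body.
    return -sum((b - w) ** 2 for b, w in zip(body, brain))

def learn(body, brain, steps=50, seed=1):
    """Hill-climbing 'lifetime learning' that mutates the brain only."""
    rng = random.Random(seed)
    best, best_f = list(brain), fitness(body, brain)
    for _ in range(steps):
        candidate = [w + rng.gauss(0, 0.1) for w in best]
        f = fitness(body, candidate)
        if f > best_f:
            best, best_f = candidate, f
    return best

body = [0.3, -0.2, 0.7]          # fixed morphology parameters
inherited = [0.0, 0.0, 0.0]      # brain as inherited from the parents
learned = learn(body, inherited)
delta = fitness(body, learned) - fitness(body, inherited)
print(f"learning delta: {delta:.3f}")
```

In the full system this delta is tracked per generation; its growth over evolutionary time is what the abstract interprets as increasing morphological intelligence.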
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
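The core conditioning idea can be sketched as follows: a text embedding per class is projected into the weights of a class-specific 1x1 segmentation head applied to shared image features. This is a heavily simplified sketch under our own assumptions: random vectors stand in for real CLIP text embeddings, and the projection is random rather than learned.

```python
import numpy as np

# Minimal sketch of CLIP-driven conditioning (our simplification): a text
# embedding for each class is mapped to the parameters of a class-specific
# 1x1 head over shared image features. Random vectors stand in for real
# CLIP embeddings; the projection would be learned in practice.

rng = np.random.default_rng(0)
emb_dim, feat_dim = 512, 32

class_names = ["liver", "liver tumor", "pancreas"]
text_embeddings = {c: rng.standard_normal(emb_dim) for c in class_names}

projection = rng.standard_normal((emb_dim, feat_dim)) * 0.01

def class_head(text_emb):
    """Turn a text embedding into (feat_dim,) weights for a 1x1 conv head."""
    return projection.T @ text_emb

features = rng.standard_normal((feat_dim, 8, 8))  # shared per-pixel features

def predict_mask(class_name):
    w = class_head(text_embeddings[class_name])
    logits = np.tensordot(w, features, axes=1)    # (8, 8) per-class logits
    return 1 / (1 + np.exp(-logits)) > 0.5        # binary mask per class

masks = {c: predict_mask(c) for c in class_names}
print(masks["liver"].shape)  # (8, 8)
```

Because each class owns only a small conditioning vector while the feature extractor is shared, adding a new class only requires a new text embedding, which is the property that lets the model extend to new classes without forgetting old ones.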
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose a Chinese cOrpus foR Gender bIas Probing and Mitigation, CORGI-PM, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require the models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Considering the computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), which is an innovative idea for realizing lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision model, which is time-saving and cost-saving without preparing a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition where a threshold for the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transformation, we propose a one-to-one self-teaching (OST) module to give the student network the ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and teacher network in the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms the existing detectors. The tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G) demonstrate its superiority over any remote sensing-based, lightweight, or distillation-based algorithms in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
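The interplay of quantization and distillation can be sketched in a few lines: weights are uniformly quantized to a given bit width, and a distillation-style distance between the full-precision ("teacher") and quantized ("student") outputs is the quantity such a framework drives down when searching over bit widths. This is our simplified sketch of the general idea, not the GHOST implementation; all names and data are ours.

```python
import numpy as np

# Sketch (our simplification, not GHOST itself): uniform weight quantization
# at several bit widths, and the teacher-student output distance that a
# quantization-distillation framework would minimize.

def quantize(weights, bits):
    """Uniform symmetric quantization to 2^(bits-1)-1 positive levels."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale)
    return q * scale

def distill_loss(teacher_out, student_out):
    return float(np.mean((teacher_out - student_out) ** 2))

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))  # full-precision layer weights
x = rng.standard_normal(4)       # an input vector

teacher_out = w @ x              # full-precision model output
losses = {bits: distill_loss(teacher_out, quantize(w, bits) @ x)
          for bits in (2, 4, 8)}
print(losses)  # loss shrinks as bit width grows
```

A bit-width search like HQ's trades this loss against the BOPs budget; the hedge is that GHOST's actual criterion is a distribution-distance threshold in the weight search space, not this toy MSE.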
Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly-undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement of excessive high-quality ground-truth data hinders their applications due to the generalization problem. Recently, Implicit Neural Representation (INR) has appeared as a powerful DL-based tool for solving the inverse problem by characterizing the attributes of a signal as a continuous function of corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which only takes spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned from sparsely-acquired (k, t)-space data itself only, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. E.g., experiments on retrospective cardiac cine datasets show an improvement of 5.5 ~ 7.1 dB in PSNR for extremely high accelerations (up to 41.6-fold). The high quality and inner continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
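The INR idea of representing a signal as a continuous function of its coordinates can be shown in miniature: below, a random Fourier-feature encoding of (x, t) coordinates with a linear readout is fit to sparse samples of a dynamic 1D "image" and then queried at unseen coordinates. This is our own toy under stated assumptions, not the paper's method, which uses an MLP trained on undersampled (k, t)-space data with low-rank and sparsity regularization.

```python
import numpy as np

# Sketch (our own miniature): fit a continuous coordinate-based
# representation to sparse samples of a time-varying signal, then query it
# at arbitrary, unseen (x, t) coordinates.

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 64)) * 10.0      # fixed random frequencies

def encode(coords):
    """Random Fourier-feature encoding of (x, t) coordinates."""
    proj = coords @ B
    return np.concatenate(
        [np.sin(proj), np.cos(proj), np.ones((len(coords), 1))], axis=1)

def signal(x, t):                            # a bump moving over time
    return np.exp(-50 * (x - 0.3 - 0.4 * t) ** 2)

coords = rng.random((200, 2))                # sparse (x, t) training samples
values = signal(coords[:, 0], coords[:, 1])
weights, *_ = np.linalg.lstsq(encode(coords), values, rcond=None)

# The learned representation is continuous: query it anywhere.
test_coords = rng.random((50, 2))
pred = encode(test_coords) @ weights
truth = signal(test_coords[:, 0], test_coords[:, 1])
rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
print(f"held-out RMSE: {rmse:.3f}")
```

The ability to evaluate the representation at coordinates never acquired is exactly what gives INR its potential to increase spatiotemporal resolution beyond the sampled grid.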
Federated learning (FL) is a method to train a model with distributed data from numerous participants such as IoT devices. It inherently assumes a uniform capacity among participants. However, participants have diverse computational resources in practice due to different conditions such as different energy budgets or executing parallel unrelated tasks. It is necessary to reduce the computation overhead for participants with insufficient computational resources, otherwise they would be unable to finish the full training process. To address the computation heterogeneity, in this paper we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Learning (CCFL), which allows each participant to determine whether to perform conventional local training or model estimation in each round based on its current computational resources. Both theoretical analysis and exhaustive experiments indicate that CCFL has the same convergence rate as FedAvg without resource constraints. Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg that retains model performance while considerably reducing computation overhead.
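The round structure can be sketched as a FedAvg-style loop in which a resource-constrained participant sometimes skips local training and submits a cheap estimate instead. This toy is ours: the estimation rule (reusing the previous update) and the quadratic local objectives are stand-ins, not CCFL's actual estimator.

```python
import numpy as np

# Toy sketch (ours, not the CCFL algorithm): a FedAvg-style loop where a
# low-resource client sometimes replaces local training with a cheap
# estimated update, here simply its previous update direction.

rng = np.random.default_rng(0)
dim, n_clients, lr = 5, 4, 0.1
global_w = np.zeros(dim)
targets = [rng.standard_normal(dim) for _ in range(n_clients)]  # local optima
prev_updates = [np.zeros(dim) for _ in range(n_clients)]

def local_train(w, target, steps=5):
    """Conventional local training: SGD on ||w - target||^2."""
    for _ in range(steps):
        w = w - lr * 2 * (w - target)
    return w

for rnd in range(10):
    updates = []
    for i in range(n_clients):
        constrained = (i == 0 and rnd % 2 == 1)   # client 0 is low-resource
        if constrained:
            update = prev_updates[i]              # cheap estimate, no training
        else:
            update = local_train(global_w, targets[i]) - global_w
        prev_updates[i] = update
        updates.append(update)
    global_w = global_w + np.mean(updates, axis=0)  # FedAvg aggregation

print(np.round(global_w, 3))  # approaches the mean of the local optima
```

Even with client 0 training only half the time, the aggregate still converges near the consensus point, which is the behavior CCFL's analysis formalizes (matching FedAvg's convergence rate).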